Results 1 - 20 of 171
1.
Yakugaku Zasshi ; 143(6): 491-495, 2023.
Article in Japanese | MEDLINE | ID: covidwho-20242312

ABSTRACT

Recent developments have enabled daily accumulated medical information to be converted into medical big data, and new evidence is expected to be created using databases and various open data sources. Database research using medical big data was actively conducted during the coronavirus disease 2019 (COVID-19) pandemic and created evidence for a new disease. Conversely, the new term "infodemic" has emerged and become a social problem. Multiple posts on social networking services (SNS) excessively amplified safety concerns about the COVID-19 vaccines based on analyses of the Vaccine Adverse Event Reporting System (VAERS), and medical experts on SNS have attempted to correct these misunderstandings. Research papers on COVID-19 treatment using medical big data have also been retracted because the underlying databases were unreliable. The appropriate interpretation of results from spontaneous reporting databases and the assurance of database reliability are not new issues that emerged during the COVID-19 pandemic; they existed before it. Thus, literacy regarding medical big data has become increasingly important. Research related to artificial intelligence (AI) is also progressing rapidly, and the use of medical big data is expected to accelerate AI development. However, as medical AI cannot resolve every problem in the clinical setting, we also need to improve our medical AI literacy.


Subject(s)
Artificial Intelligence , COVID-19 , Humans , COVID-19/epidemiology , COVID-19/prevention & control , Big Data , COVID-19 Vaccines , Pandemics/prevention & control , COVID-19 Drug Treatment , Literacy , Reproducibility of Results
2.
J Med Internet Res ; 25: e44356, 2023 Jun 09.
Article in English | MEDLINE | ID: covidwho-20240023

ABSTRACT

BACKGROUND: Digital misinformation, primarily on social media, has led to harmful and costly beliefs in the general population. Notably, these beliefs have resulted in public health crises to the detriment of governments worldwide and their citizens. However, public health officials need access to a comprehensive system capable of mining and analyzing large volumes of social media data in real time. OBJECTIVE: This study aimed to design and develop a big data pipeline and ecosystem (UbiLab Misinformation Analysis System [U-MAS]) to identify and analyze false or misleading information disseminated via social media on a certain topic or set of related topics. METHODS: U-MAS is a platform-independent ecosystem developed in Python that leverages the Twitter V2 application programming interface and the Elastic Stack. The U-MAS expert system has 5 major components: a data extraction framework, a latent Dirichlet allocation (LDA) topic model, a sentiment analyzer, a misinformation classification model, and an Elastic Cloud deployment (indexing of data and visualizations). The data extraction framework queries the data through the Twitter V2 application programming interface, with queries identified by public health experts. The LDA topic model, sentiment analyzer, and misinformation classification model are independently trained using a small, expert-validated subset of the extracted data. These models are then incorporated into U-MAS to analyze and classify the remaining data. Finally, the analyzed data are loaded into an index in the Elastic Cloud deployment and can then be presented on dashboards with advanced visualizations and analytics pertinent to infodemiology and infoveillance analysis. RESULTS: U-MAS performed efficiently and accurately. Independent investigators have successfully used the system to extract significant insights from a fluoride-related health misinformation use case (2016 to 2021). The system is currently used for a vaccine hesitancy use case (2007 to 2022) and a heat wave-related illness use case (2011 to 2022). Each component in the system performed as expected for the fluoride misinformation use case. The data extraction framework handles large amounts of data within short periods. The LDA topic models achieved relatively high coherence values (0.54), and the predicted topics were accurate and befitting the data. The sentiment analyzer performed at a correlation coefficient of 0.72 but could be improved in further iterations. The misinformation classifier attained a satisfactory correlation coefficient of 0.82 against expert-validated data. Moreover, the output dashboard and analytics hosted on the Elastic Cloud deployment are intuitive for researchers without a technical background and comprehensive in their visualization and analytics capabilities. The investigators of the fluoride misinformation use case have successfully used the system to extract important insights into public health, which have been published separately. CONCLUSIONS: The novel U-MAS pipeline has the potential to detect and analyze misleading information related to a particular topic or set of related topics.
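For readers unfamiliar with the pipeline's modelling components, the sketch below illustrates the LDA-plus-coherence step in Python with gensim, one common way to implement such a topic model. The toy tweet corpus, topic count, and preprocessing are assumptions for illustration; the paper reports a coherence of 0.54 but does not publish this code.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.models.coherencemodel import CoherenceModel

# Placeholder for the expert-queried, preprocessed tweet corpus.
tweets = [
    ["fluoride", "water", "safe", "levels"],
    ["fluoride", "causes", "harm", "avoid", "water"],
    ["vaccine", "hesitancy", "misinformation", "online"],
    ["misinformation", "fluoride", "online", "harm"],
]

dictionary = Dictionary(tweets)
corpus = [dictionary.doc2bow(doc) for doc in tweets]

# Train the topic model on the bag-of-words corpus (num_topics is an assumption).
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=10, random_state=42)

# c_v coherence, the metric family the abstract reports (0.54 for fluoride).
coherence = CoherenceModel(model=lda, texts=tweets, dictionary=dictionary,
                           coherence="c_v").get_coherence()
print(f"coherence: {coherence:.2f}")
```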


Subject(s)
COVID-19 , Social Media , Humans , Big Data , Artificial Intelligence , Ecosystem , Fluorides , Communication
3.
Front Public Health ; 11: 1029385, 2023.
Article in English | MEDLINE | ID: covidwho-20236976

ABSTRACT

Rapid urbanization has gradually strengthened the spatial links between cities, which greatly increases the possibility of epidemic spread. Traditional methods cannot detect epidemics early and accurately. This study took Hubei province as the study area and used Tencent's location big data to study the spread of COVID-19. Using ArcGIS as a platform, urban relation intensity, urban centrality, overlay analysis, and correlation analysis were used to measure and analyze the population mobility data of 17 cities in Hubei province. The results showed high similarity in the spatial distributions of urban relation intensity, urban centrality, and the number of infected people, all exhibiting a "one large, two small" spatial pattern with Wuhan as the core and Huanggang and Xiaogan as the two wings. The urban centrality of Wuhan was four times higher than that of Huanggang and Xiaogan, and the urban relation intensity of Wuhan with Huanggang and Xiaogan was also the second highest in Hubei province. Meanwhile, the analysis of infection counts found that the number of infected persons in Wuhan was approximately two times that of these two cities. Correlation analysis found an extremely significant positive correlation among urban relation intensity, urban centrality, and the number of infected people, with R2 values of 0.976 and 0.938, respectively. Based on Tencent's location big data, this study investigated epidemic spread in terms of spatial risk classification and the selection of prevention and control levels, addressing shortcomings in epidemic risk analysis and judgment. This could provide a reference for city managers to effectively coordinate existing resources, formulate policy, and control the epidemic.
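As a rough illustration of the correlation step, the Python sketch below computes Pearson correlations (and the corresponding R2 values) between hypothetical per-city indicators and infection counts; the values are placeholders, not the study's data for the 17 Hubei cities.

```python
import numpy as np
from scipy import stats

# Hypothetical per-city indicators (the study covered 17 Hubei cities).
relation_intensity = np.array([9.8, 2.4, 2.1, 1.0, 0.8, 0.6])
urban_centrality   = np.array([8.9, 2.2, 2.0, 1.1, 0.9, 0.5])
infected           = np.array([50000, 2900, 3500, 1500, 1000, 700])

for name, x in [("relation intensity", relation_intensity),
                ("urban centrality", urban_centrality)]:
    r, p = stats.pearsonr(x, infected)
    # The paper reports R2 values of 0.976 and 0.938 on its real data.
    print(f"{name}: R^2 = {r**2:.3f} (p = {p:.4f})")
```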


Subject(s)
COVID-19 , Epidemics , Animals , Humans , Big Data , COVID-19/epidemiology , Disease Outbreaks , Cities
4.
J Am Med Inform Assoc ; 30(7): 1323-1332, 2023 06 20.
Article in English | MEDLINE | ID: covidwho-2328343

ABSTRACT

OBJECTIVES: As real-world electronic health record (EHR) data continue to grow exponentially, novel methodologies involving artificial intelligence (AI) are being increasingly applied to enable efficient data-driven learning and, ultimately, to advance healthcare. Our objective is to provide readers with an understanding of evolving computational methods and to help them decide which methods to pursue. TARGET AUDIENCE: The sheer diversity of existing methods presents a challenge for health scientists who are beginning to apply computational methods to their research. Therefore, this tutorial is aimed at scientists working with EHR data who are early entrants into the field of applying AI methodologies. SCOPE: This manuscript describes the diverse and growing AI research approaches in healthcare data science and categorizes them into 2 distinct paradigms, bottom-up and top-down, to give health scientists venturing into artificial intelligence research an understanding of the evolving computational methods and help in deciding which methods to pursue through the lens of real-world healthcare data.


Subject(s)
Artificial Intelligence , Physicians , Humans , Data Science , Big Data , Delivery of Health Care
5.
Int J Mol Sci ; 24(9)2023 Apr 24.
Article in English | MEDLINE | ID: covidwho-2320161

ABSTRACT

The recent advances in artificial intelligence (AI) and machine learning have driven the design of new expert systems and automated workflows that are able to model complex chemical and biological phenomena. In recent years, machine learning approaches have been developed and actively deployed to facilitate computational and experimental studies of protein dynamics and allosteric mechanisms. In this review, we discuss in detail new developments along two major directions of allosteric research through the lens of data-intensive biochemical approaches and AI-based computational methods. Despite considerable progress in applications of AI methods to protein structure and dynamics studies, the intersection between allosteric regulation, emerging structural biology technologies, and AI approaches remains largely unexplored, calling for the development of AI-augmented integrative structural biology. In this review, we focus on the latest progress in deep high-throughput mining and comprehensive mapping of allosteric protein landscapes and allosteric regulatory mechanisms, as well as on new developments in AI methods for the prediction and characterization of allosteric binding sites at the proteome level. We also discuss new AI-augmented structural biology approaches that expand our knowledge of the universe of protein dynamics and allostery. We conclude with an outlook highlighting the importance of developing an open science infrastructure for machine learning studies of allosteric regulation and for the validation of computational approaches using integrative studies of allosteric mechanisms. The development of community-accessible tools that uniquely leverage the existing experimental and simulation knowledgebase to enable interrogation of allosteric functions can provide a much-needed boost to further innovation and to the integration of experimental and computational technologies empowered by the booming AI field.


Subject(s)
Artificial Intelligence , Deep Learning , Allosteric Site , Big Data , Proteins/chemistry
6.
Int J Environ Res Public Health ; 20(9)2023 05 08.
Article in English | MEDLINE | ID: covidwho-2319460

ABSTRACT

COVID-19 is a respiratory infectious disease that was first reported in Wuhan, China, in December 2019. With COVID-19 spreading to patients worldwide, the WHO declared it a pandemic on 11 March 2020. This study collected 1,746,347 tweets from the Korean-language version of Twitter between February and May 2020 to explore future signals of COVID-19 and present response strategies for information diffusion. To explore future signals, we analyzed the term frequency and document frequency of key factors occurring in the tweets, measuring the degree of visibility and the degree of diffusion. Depression, digestive symptoms, inspection, diagnosis kits, stay home, and obesity had high frequencies. The increase in the degree of visibility was higher than the median value, indicating that these signals became stronger with time. The mean word frequency for the degree of visibility was high for disinfectant, healthcare, and mask; however, the increase in the degree of visibility was lower than the median value, indicating that these signals grew weaker with time. Infodemic had a higher mean word frequency for the degree of diffusion; however, its mean increase rate was lower than the median value, indicating that this signal also grew weaker over time. As the general flow of signal progression is latent signal → weak signal → strong signal → strong signal with a lower increase rate, active response strategies are needed for stay home, inspection, obesity, digestive symptoms, online shopping, and asymptomatic.
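The sketch below illustrates, on toy data, how term frequency (degree of visibility) and document frequency (degree of diffusion) can be tallied per month for candidate keywords; the corpus and keyword list are illustrative, not the study's 1.7 million tweets.

```python
# Toy monthly corpus; each string stands for one tweet.
monthly_tweets = {
    "2020-02": ["mask shortage again", "please stay home", "mask reuse tips"],
    "2020-03": ["stay home orders extended", "buy a mask", "depression from stay home"],
}
keywords = ["mask", "stay home", "depression"]

for kw in keywords:
    # Term frequency per month -> degree of visibility.
    tf = {m: sum(t.count(kw) for t in docs) for m, docs in monthly_tweets.items()}
    # Document frequency per month -> degree of diffusion.
    df = {m: sum(kw in t for t in docs) for m, docs in monthly_tweets.items()}
    print(kw, "| visibility:", tf, "| diffusion:", df)
```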


Subject(s)
COVID-19 , Social Media , Humans , COVID-19/epidemiology , SARS-CoV-2 , Big Data , China
7.
PLoS One ; 18(4): e0285212, 2023.
Article in English | MEDLINE | ID: covidwho-2294898

ABSTRACT

Recently, big data and its applications have grown sharply in various fields such as IoT, bioinformatics, eCommerce, and social media. The huge volume of data poses enormous challenges to the architecture, infrastructure, and computing capacity of IT systems; the scientific and industrial community therefore has a compelling need for large-scale, robust computing systems. Since one of the characteristics of big data is value, data should be published so that analysts can extract useful patterns from them. However, data publishing may lead to the disclosure of individuals' private information. Among modern parallel computing platforms, Apache Spark is a fast, in-memory computing framework for large-scale data processing that provides high scalability by introducing resilient distributed datasets (RDDs); owing to in-memory computation, it can be up to 100 times faster than Hadoop. Apache Spark is therefore one of the essential frameworks for implementing distributed methods for privacy-preserving big data publishing (PPBDP). This paper uses the RDD programming model of Apache Spark to propose an efficient parallel implementation of a new computing model for big data anonymization. This computing model has three phases of in-memory computation to address the runtime, scalability, and performance of large-scale data anonymization. The model supports partition-based data clustering algorithms to preserve the λ-diversity privacy model using transformations and actions on RDDs. Accordingly, the authors have investigated a Spark-based implementation for preserving the λ-diversity privacy model using two designed distance functions, City block and Pearson. The results of the paper provide a comprehensive guideline allowing researchers to apply Apache Spark in their own research.
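The following PySpark sketch illustrates the kind of RDD transformations and designed distance functions the paper describes; the record layout, centroid, and surrounding clustering loop are assumptions, and the full three-phase anonymization model is not reproduced.

```python
import math
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ppbdp-sketch").getOrCreate()
sc = spark.sparkContext

def city_block(a, b):
    # City block (Manhattan) distance over quasi-identifier attributes.
    return sum(abs(x - y) for x, y in zip(a, b))

def pearson_distance(a, b):
    # 1 - Pearson correlation, so similar records score near 0.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return 1 - cov / (sa * sb)

# Hypothetical records: (quasi-identifier tuple, record id).
records = sc.parallelize([((34, 120, 70), "r1"),
                          ((36, 118, 72), "r2"),
                          ((60, 90, 50), "r3")])
centroid = (35, 119, 71)  # a cluster centroid from a previous phase

# Transformation (map) plus action (collect) on the RDD: each record's
# distance to the centroid under both designed distance functions.
dists = records.map(lambda kv: (kv[1],
                                city_block(kv[0], centroid),
                                round(pearson_distance(kv[0], centroid), 4))).collect()
print(dists)
spark.stop()
```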


Subject(s)
Big Data , Software , Humans , Data Anonymization , Algorithms , Computational Biology
9.
PLoS Comput Biol ; 19(4): e1011083, 2023 04.
Article in English | MEDLINE | ID: covidwho-2306502

ABSTRACT

As the infected and vaccinated populations increase, some countries have decided not to impose non-pharmaceutical intervention measures anymore and to coexist with COVID-19. However, we do not have a comprehensive understanding of the consequences, especially for China, where most of the population has not been infected and most Omicron transmissions are silent. This paper aims to reveal the complete silent transmission dynamics of COVID-19 through agent-based simulations overlaying a big dataset of more than 0.7 million real individual mobility tracks, covering a week in a Chinese city without any intervention measures, with an extent of completeness and realism not attained in existing studies. Together with the empirically inferred transmission rate of COVID-19, we find, surprisingly, that with only 70 citizens infected initially, 0.33 million are ultimately infected silently. We also reveal a characteristic daily periodic pattern in the transmission dynamics, with peaks in mornings and afternoons. In addition, by inferring individual professions, visited locations, and age groups, we found that retail, catering, and hotel staff are more likely to be infected than workers in other professions, and that the elderly and retirees are more likely to be infected at home than outside the home.
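A toy agent-based step in the spirit of this simulation is sketched below: agents follow mobility tracks, and susceptible agents who share a location with an infectious agent become infected with some probability. The tracks and transmission probability are illustrative, not the paper's 0.7 million real trajectories or its empirically inferred rate.

```python
import random

random.seed(1)
BETA = 0.05  # per-contact, per-timestep infection probability (assumed)

# Hourly mobility tracks: agent -> location visited at each timestep.
tracks = {
    "a": ["home1", "mall", "home1"],
    "b": ["home2", "mall", "office"],
    "c": ["gym",   "mall", "office"],
}
state = {"a": "I", "b": "S", "c": "S"}  # agent "a" starts infectious

for t in range(3):
    # Group agents by their current location.
    at = {}
    for agent, locs in tracks.items():
        at.setdefault(locs[t], []).append(agent)
    # Susceptibles co-located with an infectious agent may become infected.
    for agents in at.values():
        if any(state[x] == "I" for x in agents):
            for x in agents:
                if state[x] == "S" and random.random() < BETA:
                    state[x] = "I"
print(state)
```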


Subject(s)
COVID-19 , Humans , Aged , COVID-19/epidemiology , COVID-19/prevention & control , SARS-CoV-2 , Big Data , Occupations , China/epidemiology
10.
Bioresour Technol ; 372: 128625, 2023 Mar.
Article in English | MEDLINE | ID: covidwho-2287473

ABSTRACT

Given the potential of machine learning algorithms to revolutionize the bioengineering field, this paper examined and summarized the literature related to artificial intelligence (AI) in the bioprocessing field. Natural language processing (NLP) was employed to explore the direction of the research domain. All papers from 2013 to 2022 with specific keywords on bioprocessing using AI were extracted from Scopus and grouped into two five-year periods, 2013-2017 and 2018-2022, in which the past and recent research directions were compared. Based on this procedure, selected sample papers from the most recent five years were subjected to further review and analysis. The results show that 50% of the publications in the past five years focused on topics related to hybrid models, ANN, biopharmaceutical manufacturing, and biorefinery. The summarization and analysis of the outcomes indicated that implementing AI could improve design and process engineering strategies in bioprocessing fields.


Subject(s)
Artificial Intelligence , Big Data , Machine Learning , Algorithms , Natural Language Processing
11.
BMJ Open ; 13(2): e071261, 2023 02 17.
Article in English | MEDLINE | ID: covidwho-2262770

ABSTRACT

INTRODUCTION: The impact of long COVID on health-related quality of life (HRQoL) and productivity is not currently known. It is important to understand who is worst affected by long COVID and the cost to the National Health Service (NHS) and society, so that strategies like booster vaccines can be prioritised for the right people. OpenPROMPT aims to understand the impact of long COVID on HRQoL in adults attending English primary care. METHODS AND ANALYSIS: We will ask people to participate in this cohort study through a smartphone app (Airmid) by completing a series of questionnaires held within the app. The questionnaires will ask about HRQoL, productivity, and symptoms of long COVID. Participants will be asked to fill in the questionnaires once a month for 90 days. Questionnaire responses will be linked, where possible, to participants' existing health records from primary care, secondary care, and COVID testing and vaccination data. Analysis will take place on the OpenSAFELY data platform and will estimate the impact of long COVID on HRQoL, productivity, and cost to the NHS. ETHICS AND DISSEMINATION: The Proportionate Review Sub-Committee of the South Central-Berkshire B Research Ethics Committee has reviewed and approved the study and has agreed that we can ask people to take part (22/SC/0198). Our results will provide information to support long-term care and make recommendations for the prevention of long COVID in the future. TRIAL REGISTRATION NUMBER: NCT05552612.


Subject(s)
COVID-19 , Mobile Applications , Adult , Humans , Big Data , Cohort Studies , COVID-19/prevention & control , COVID-19 Testing , Patient Reported Outcome Measures , Post-Acute COVID-19 Syndrome , Smartphone , State Medicine
12.
J Healthc Eng ; 2023: 4301745, 2023.
Article in English | MEDLINE | ID: covidwho-2259501

ABSTRACT

The infectious coronavirus disease (COVID-19) has become a great threat to global human health. Timely and rapid detection of COVID-19 cases is crucial to control its spread through isolation measures, as well as for proper treatment. Though the real-time reverse transcription-polymerase chain reaction (RT-PCR) test is a widely used technique for detecting COVID-19 infection, recent research suggests chest computed tomography (CT)-based screening as an effective substitute when RT-PCR is limited by time or availability. Consequently, deep learning-based COVID-19 detection from chest CT images is gaining momentum. Furthermore, visual analysis of data has enhanced the opportunities for maximizing prediction performance in this big data and deep learning realm. In this article, we propose two separate deformable deep networks, derived from the conventional convolutional neural network (CNN) and the state-of-the-art ResNet-50, to detect COVID-19 cases from chest CT images. The impact of the deformable concept was observed through a comparative performance analysis between the deformable and normal models, and the deformable models showed better prediction results than their normal forms. Furthermore, the proposed deformable ResNet-50 model outperformed the proposed deformable CNN model. The gradient class activation mapping (Grad-CAM) technique was used to visualize and check the localization of the targeted regions at the final convolutional layer, and the localization was found to be excellent. A total of 2481 chest CT images were used to evaluate the performance of the proposed models, with a random train-valid-test split ratio of 80 : 10 : 10. The proposed deformable ResNet-50 model achieved a training accuracy of 99.5% and a test accuracy of 97.6%, with a specificity of 98.5% and a sensitivity of 96.5%, which are satisfactory compared with related works. The comprehensive discussion demonstrates that the proposed deformable ResNet-50 model-based COVID-19 detection technique can be useful for clinical applications.
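To make the "deformable" idea concrete, the sketch below swaps a regular convolution for torchvision's DeformConv2d, whose sampling locations are shifted by learned offsets; the exact layers the authors replaced and their training setup are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    """A regular conv replaced by a deformable conv with learned offsets."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # The offset branch predicts 2 offsets (dx, dy) per kernel tap.
        self.offset = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x):
        return self.deform(x, self.offset(x))

x = torch.randn(1, 64, 56, 56)            # e.g. an intermediate ResNet-50 feature map
print(DeformableBlock(64, 128)(x).shape)  # torch.Size([1, 128, 56, 56])
```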


Subject(s)
COVID-19 , Humans , COVID-19/diagnostic imaging , Tomography, X-Ray Computed , Big Data , Motion , Neural Networks, Computer
13.
Int J Environ Res Public Health ; 20(5)2023 02 22.
Article in English | MEDLINE | ID: covidwho-2269303

ABSTRACT

In recent years, there has been a growing amount of discussion on the use of big data to prevent and treat pandemics. The current research aimed to use CiteSpace (CS) visual analysis to uncover research and development trends, to help academics decide on future research, and to create a framework for enterprises and organizations to plan for the growth of big data-based epidemic control. First, a total of 202 original papers were retrieved from Web of Science (WOS) using a complete list and analyzed using the CS scientometric software. The CS parameters included the date range (from 2011 to 2022, with a 1-year slice for the co-authorship and co-occurrence assessments), visualization (to show the fully integrated networks), selection criteria (the top 20 percent), node types (author, institution, region, cited reference, cited author, journal, and keywords), and pruning (pathfinder, sliced networks). Lastly, the correlations in the data were explored and the findings of the visualization analysis of big data pandemic control research were presented. According to the findings, "COVID-19 infection" was the hottest cluster, with 31 references in 2020, while "Internet of things (IoT) platform and unified health algorithm" was the emerging research topic, with 15 citations. "Influenza, internet, China, human mobility, and province" were the emerging keywords in 2021-2022, with strengths of 1.61 to 1.2. The Chinese Academy of Sciences was the top institution and collaborated with 15 other organizations. Qadri and Wilson were the top authors in this field. The Lancet accepted the most papers in this field, while the United States, China, and Europe accounted for the bulk of the articles in this research area. The research showed how big data may help us to better understand and control pandemics.


Subject(s)
COVID-19 , Humans , United States , Data Science , Europe , Big Data , Pandemics
14.
PLoS One ; 18(3): e0282587, 2023.
Article in English | MEDLINE | ID: covidwho-2272812

ABSTRACT

BACKGROUND: The COVID-19 pandemic has demonstrated the need for efficient, comprehensive, and simultaneous assessment of multiple combined novel therapies for viral infection across the range of illness severity. Randomized controlled trials (RCTs) are the gold standard by which the efficacy of therapeutic agents is demonstrated. However, they are rarely designed to assess treatment combinations across all relevant subgroups. A big data approach to analyzing the real-world impacts of therapies may confirm or supplement RCT evidence to further assess the effectiveness of therapeutic options for rapidly evolving diseases such as COVID-19. METHODS: Gradient boosted decision tree, deep neural network, and convolutional neural network classifiers were implemented and trained on the National COVID Cohort Collaborative (N3C) data repository to predict the patients' outcome of death or discharge. The models leveraged the patients' characteristics, the severity of COVID-19 at diagnosis, and the calculated proportion of days on different treatment combinations after diagnosis as features to predict the outcome. The most accurate model was then analyzed with eXplainable Artificial Intelligence (XAI) algorithms to provide insights into the learned impact of treatment combinations on the model's outcome predictions. RESULTS: Gradient boosted decision tree classifiers presented the highest prediction accuracy in identifying patient outcomes, with an area under the receiver operating characteristic curve of 0.90 and an accuracy of 0.81 for the outcomes of death or sufficient improvement to be discharged. The resulting model predicts that the treatment combination of anticoagulants and steroids is associated with the highest probability of improvement, followed by combined anticoagulants and targeted antivirals. In contrast, monotherapies of single drugs, including the use of anticoagulants without steroids or antivirals, are associated with poorer outcomes. CONCLUSIONS: By accurately predicting mortality, this machine learning model provides insights into the treatment combinations associated with clinical improvement in COVID-19 patients. Analysis of the model's components suggests a benefit to treatment with a combination of steroids, antivirals, and anticoagulant medication. The approach also provides a framework for simultaneously evaluating multiple real-world therapeutic combinations in future research studies.
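The sketch below mirrors the modelling step on synthetic data: a gradient-boosted classifier predicts the outcome from patient features and treatment-exposure proportions, followed by an explainability pass. Permutation importance stands in for the paper's XAI algorithms; all feature names and data are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Invented features: age and proportions of days on two treatments.
X = np.column_stack([rng.integers(20, 90, n),  # age
                     rng.random(n),            # days on anticoagulants (proportion)
                     rng.random(n)])           # days on steroids (proportion)
# Synthetic outcome: older age raises risk, both treatments lower it.
y = (X[:, 0] / 90 - 0.4 * X[:, 1] - 0.3 * X[:, 2]
     + rng.normal(0, 0.2, n) > 0.4).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Explainability pass: permutation importance on held-out data.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, m in zip(["age", "anticoagulants", "steroids"], imp.importances_mean):
    print(f"{name}: {m:.3f}")
```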


Subject(s)
COVID-19 , SARS-CoV-2 , Humans , Big Data , Antiviral Agents/therapeutic use , Anticoagulants
15.
Front Public Health ; 11: 1061307, 2023.
Article in English | MEDLINE | ID: covidwho-2270531

ABSTRACT

Background: Concerns about the role of chronically used medications in the clinical outcomes of coronavirus disease 2019 (COVID-19) could substantially disrupt the management of non-communicable diseases (NCDs) by creating ambivalence toward medication continuation. This study aimed to investigate the association of single or combined chronically used medications for NCDs with the clinical outcomes of COVID-19. Methods: This retrospective study was conducted on the intersection of two databases, the Iranian COVID-19 registry and the Iran Health Insurance Organization. The primary outcome was death during COVID-19 hospitalization, and secondary outcomes included length of hospital stay, Intensive Care Unit (ICU) admission, and ventilation therapy. The Anatomical Therapeutic Chemical (ATC) classification system was used for medication grouping. The frequent pattern growth algorithm was utilized to investigate the effect of medication combinations on COVID-19 outcomes. Findings: Aspirin, chronically used by 10.8% of hospitalized COVID-19 patients, was the most frequently used medication, followed by atorvastatin (9.2%) and losartan (8.0%). Adrenergics in combination with corticosteroid inhalants (ACIs), with an odds ratio (OR) of 0.79 (95% confidence interval: 0.68-0.92), were the medications most strongly associated with a lower chance of ventilation therapy. Oxicams had the lowest OR for COVID-19 death, 0.80 (0.73-0.87), followed by ACIs [0.85 (0.77-0.95)] and biguanides [0.86 (0.82-0.91)]. Conclusion: The chronic use of the most frequently used medications for NCD management was not associated with poor COVID-19 outcomes. Thus, when indicated, physicians should discourage patients with NCDs from discontinuing their medications for fear of possible adverse effects on COVID-19 prognosis.
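The sketch below illustrates the frequent-pattern-growth step on toy medication lists, using mlxtend's fpgrowth as one common implementation; the medication transactions are illustrative, not the registry data.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

# Toy per-patient chronic medication lists (one transaction per patient).
patients = [
    ["aspirin", "atorvastatin"],
    ["aspirin", "losartan"],
    ["aspirin", "atorvastatin", "losartan"],
    ["biguanide"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(patients).transform(patients), columns=te.columns_)

# Frequent medication combinations above a minimum support threshold.
print(fpgrowth(onehot, min_support=0.25, use_colnames=True))
```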


Subject(s)
COVID-19 , Humans , SARS-CoV-2 , Retrospective Studies , Big Data , Iran , Outcome Assessment, Health Care
16.
Comput Intell Neurosci ; 2023: 5212712, 2023.
Article in English | MEDLINE | ID: covidwho-2269979

ABSTRACT

Network public opinion represents public social opinion to a certain extent and has an important impact on the formulation of national policies and judgments. Therefore, China and other countries attach great importance to the study of online public opinion. However, current research lacks the combination of theory with practical cases and the intersection of the social and natural sciences. This work aims to overcome the technical defects of traditional management systems, break through the difficulties and pain points of existing network public opinion risk management, and improve its efficiency. First, a network public opinion isolation strategy based on an infectious disease propagation model is proposed, and optimal control theory is used to realize a functional control model that maximizes social utility. Second, blockchain technology is used to build a network public opinion risk management system, which is used to conduct a detailed study of identifying and perceiving online public opinion risk. Finally, a Chinese word segmentation scheme based on a Long Short-Term Memory (LSTM) network model and a text emotion recognition scheme based on a convolutional neural network are proposed; both schemes are validated on a typical corpus. The results show that when the system has a control strategy, the number of susceptible individuals drops significantly: 2 days after the public opinion is generated, the number of susceptible individuals decreases from 1,000 to 250, and it stabilizes after 3 days; the number of lurkers increases from 100 to 620 after 2 days and stabilizes after 3 days. These data demonstrate that the designed isolation control strategy is effective. Changes in public opinion among infected individuals show that quarantine control strategies played a significant role in the early days of Coronavirus Disease 2019. The rate of change in the number of infections is more strongly affected when quarantine controls are increased, especially in the days leading up to the outbreak. When the system adopts the optimal control strategy, the scope of influence of public opinion becomes smaller and control becomes easier. When the word-vector dimension of emergent events is 200, accuracy may be higher. This method provides ideas for applying blockchain and deep learning technology to network public opinion control.
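The sketch below integrates a minimal SIR-style propagation model extended with an isolation (quarantine) term, in the spirit of the infectious-disease model the paper adapts to public opinion; all parameter values and compartments are illustrative, not fitted to the paper's data.

```python
import numpy as np
from scipy.integrate import odeint

beta, gamma, q = 0.6, 0.15, 0.3  # spread, recovery, isolation rates (assumed)

def model(y, t):
    S, I, R, Q = y
    N = S + I + R + Q
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I - q * I  # the q*I term isolates spreaders
    dR = gamma * I
    dQ = q * I
    return [dS, dI, dR, dQ]

t = np.linspace(0, 10, 101)  # days after the opinion is generated
out = odeint(model, [1000, 100, 0, 0], t)  # 1,000 susceptible, 100 spreaders
print("susceptible after 2 days:", round(out[20][0]))
```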


Subject(s)
Blockchain , COVID-19 , Humans , Public Opinion , Big Data , Technology
17.
Front Public Health ; 11: 1112547, 2023.
Article in English | MEDLINE | ID: covidwho-2286012

ABSTRACT

Big data technology plays an important role in the prevention and control of public health emergencies such as the COVID-19 pandemic. Current studies on model construction, such as the SIR infectious disease model and the 4R crisis management model, have put forward decision-making suggestions from different perspectives, which also provide a reference basis for the research in this paper. This paper conducts an exploratory study on the construction of a big data prevention and control model for public health emergencies using grounded theory, a qualitative research method, with literature, policies, and regulations as research samples, performing a grounded analysis through three-level coding and a saturation test. The main results are as follows: (1) The three elements of the data layer, subject layer, and application layer play a prominent role in China's digital epidemic prevention and control practice and constitute the basic framework of the "DSA" model. (2) The "DSA" model integrates cross-industry, cross-region, and cross-domain epidemic data into one system framework, effectively resolving the fragmentation caused by "information islands". (3) The "DSA" model analyzes the differences in the information needs of different subjects during an outbreak and summarizes several collaborative approaches to promote resource sharing and cooperative governance. (4) The "DSA" model analyzes the specific application scenarios of big data technology at different stages of epidemic development, effectively responding to the disconnect between current technological development and real-world needs.


Subject(s)
COVID-19 , Public Health , Humans , Public Health/methods , COVID-19/epidemiology , COVID-19/prevention & control , Emergencies , Big Data , Pandemics/prevention & control , Grounded Theory
18.
Viruses ; 15(3)2023 03 09.
Article in English | MEDLINE | ID: covidwho-2259356

ABSTRACT

SARS-CoV-2 genomic sequencing has peaked at unprecedented levels compared to other viruses [...].


Subject(s)
COVID-19 , Viruses , Humans , SARS-CoV-2/genetics , COVID-19/genetics , Big Data , Viruses/genetics , Genome, Viral
19.
J R Soc Med ; 116(1): 3, 2023 01.
Article in English | MEDLINE | ID: covidwho-2247748

Subject(s)
Big Data , Technology , Humans
20.
J Med Internet Res ; 25: e42401, 2023 01 16.
Article in English | MEDLINE | ID: covidwho-2246288

ABSTRACT

BACKGROUND: Due to the emergency responses early in the COVID-19 pandemic, the use of digital health in health care increased abruptly. However, it remains unclear whether this uptake was sustained in the long term, especially once patients could choose between digital and traditional health services as the latter regained their functionality over the course of the pandemic. OBJECTIVE: We aim to understand how public interest in digital health changed, as a proxy for digital health-seeking behavior, and to what extent this change was sustained over time. METHODS: We used an interrupted time-series analysis of Google Trends data with break points on March 11, 2020 (the World Health Organization's declaration of COVID-19 as a pandemic), and December 20, 2020 (the announcement of the first COVID-19 vaccines). Nationally representative time-series data from February 2019 to August 2021 were extracted from Google Trends for 6 countries with English as their dominant language: Canada, the United States, the United Kingdom, New Zealand, Australia, and Ireland. We measured the changes in the relative search volumes of the keywords online doctor, telehealth, online health, telemedicine, and health app. In doing so, we captured the prepandemic trend, the immediate change due to the declaration of COVID-19 as a pandemic, and the gradual change after the declaration. RESULTS: Digital health search volumes immediately increased in all countries under study after the declaration of COVID-19 as a pandemic, with some variation in which keywords were used per country. However, searches declined after this immediate spike, sometimes reverting to prepandemic levels. The announcement of COVID-19 vaccines did not consistently affect digital health search volumes in the countries under study. The exception was the search volume of health app, which was observed to be either stable or gradually increasing during the pandemic. CONCLUSIONS: Our findings suggest that the increased public interest in digital health associated with the pandemic was not sustained, pointing to remaining structural barriers. Further building of digital health capacity and the development of robust digital health governance frameworks remain crucial to facilitating a sustainable digital health transformation.
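The sketch below shows a minimal interrupted time-series (segmented regression) fit on a synthetic weekly search-volume series with a single break point; the study used two break points and Google Trends data, neither of which is reproduced here.

```python
import numpy as np
import statsmodels.api as sm

weeks = np.arange(60)
brk = 30                                          # e.g. the pandemic-declaration week
level = (weeks >= brk).astype(float)              # immediate change at the break
slope = np.where(weeks >= brk, weeks - brk, 0.0)  # gradual change after the break

# Synthetic relative search volume: a jump at the break, then a decline.
rng = np.random.default_rng(0)
y = 20 + 0.1 * weeks + 25 * level - 0.8 * slope + rng.normal(0, 2, 60)

X = sm.add_constant(np.column_stack([weeks, level, slope]))
fit = sm.OLS(y, X).fit()
print(fit.params.round(2))  # [baseline, pre-trend, immediate jump, slope change]
```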


Subject(s)
COVID-19 , Humans , United States , COVID-19/epidemiology , COVID-19/prevention & control , Pandemics/prevention & control , COVID-19 Vaccines , Search Engine , Big Data , Patient Acceptance of Health Care